
    Pairwise Covariates-adjusted Block Model for Community Detection

    One of the most fundamental problems in network analysis is community detection. The stochastic block model (SBM) is a widely used model for network data, for which various estimation methods have been developed and their community detection consistency established. However, the SBM is restricted by the strong assumption that all nodes in the same community are stochastically equivalent, which may not be suitable for practical applications. We introduce the pairwise covariates-adjusted stochastic block model (PCABM), a generalization of the SBM that incorporates pairwise covariate information. We study the maximum likelihood estimates of the covariate coefficients as well as the community assignments, and show that both are consistent under suitable sparsity conditions. Spectral clustering with adjustment (SCWA) is introduced to fit PCABM efficiently. Under certain conditions, we derive the error bound of community estimation under SCWA and show that it is community detection consistent. PCABM compares favorably with the SBM and the degree-corrected stochastic block model (DCBM) on a wide range of simulated and real networks when covariate information is available. Comment: 41 pages, 6 figures
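    The adjustment idea behind SCWA can be sketched as follows. Assuming a multiplicative link in which each expected edge is scaled by exp(Z_ij . beta), dividing each entry of the adjacency matrix by its estimated covariate effect leaves an SBM-like matrix to which ordinary spectral clustering applies. The function name and the tiny k-means loop below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def scwa_sketch(A, Z, beta, k, n_iter=50):
    """Spectral clustering with adjustment (illustrative sketch).

    A    : (n, n) adjacency matrix
    Z    : (n, n, p) pairwise covariates
    beta : (p,) estimated covariate coefficients
    k    : number of communities

    Assumes the multiplicative link E[A_ij] ~ exp(Z_ij . beta) * B_{ci,cj}:
    dividing each edge by its covariate effect removes the covariate signal,
    after which vanilla spectral clustering is applied.
    """
    # Remove the pairwise covariate effect from each entry.
    A_adj = A / np.exp(Z @ beta)
    # Leading-k eigenvectors (by absolute eigenvalue) of the symmetrized matrix.
    vals, vecs = np.linalg.eigh((A_adj + A_adj.T) / 2)
    U = vecs[:, np.argsort(-np.abs(vals))[:k]]
    # Deterministic farthest-point initialization, then Lloyd's k-means.
    centers = [U[0]]
    for _ in range(1, k):
        d = np.min(((U[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(U[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((U[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([U[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

    With the covariate effect divided out, any standard community detection routine can in principle be run on the adjusted matrix.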

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has long been an important topic in remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some common disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, while many learning-based methods have been proposed over decades of innovation, most of them require a large set of training pairs, which is impractical for many real problems. To address these problems, we first propose an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically-accurate high-resolution HSI. First, given the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated by subtracting from it a CNN-generated blurry version. The final HR-HSI is obtained by injecting these details into the output of a generative CNN that takes the LR-HSI as input. LPFNet is designed to fuse an LR-HSI and an HR-MSI covering the same visible-near-infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are just as important as VNIR bands, but their spatial details are more challenging to enhance, because the HR-MSI used to provide spatial details in the fusion process usually has no SWIR coverage, or only lower-spatial-resolution SWIR. To this end, we designed an unsupervised cascade fusion network (UCFNet) to sharpen the Vis-NIR-SWIR LR-HSI.
    First, a preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral sharpening algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce an HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervising loss based on the cascade strategy is introduced to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared with the proposed methods under several quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
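    The detail-injection step described above can be sketched numerically. In this sketch a Gaussian blur (via SciPy) stands in for the CNN-generated blurry version, and the function name and sigma parameter are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inject_details(preliminary_hr, generated, sigma=2.0):
    """Detail injection, with a Gaussian blur standing in for the
    CNN-generated blurry image (the paper uses a learned network).

    preliminary_hr : (bands, H, W) preliminary HR-HSI from linear regression
    generated      : (bands, H, W) output of the generative CNN
    """
    # High-frequency details = preliminary image minus its blurred version.
    # sigma is applied only to the spatial axes, not across bands.
    blurry = gaussian_filter(preliminary_hr, sigma=(0, sigma, sigma))
    details = preliminary_hr - blurry
    # Inject the details into the generated image to form the final HR-HSI.
    return generated + details
```

    The key property is that whatever high-frequency content the blur removes from the preliminary image is restored on top of the generated output, band by band.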

    Spectral clustering via adaptive layer aggregation for multi-layer networks

    One of the fundamental problems in network analysis is detecting community structure in multi-layer networks, in which each layer represents one type of edge information among the nodes. We propose integrative spectral clustering approaches based on effective convex layer aggregations. Our aggregation methods are strongly motivated by a delicate asymptotic analysis of the spectral embedding of weighted adjacency matrices and the downstream k-means clustering, in a challenging regime where community detection consistency is impossible. In fact, the methods are shown to estimate the optimal convex aggregation, which minimizes the mis-clustering error under some specialized multi-layer network models. Our analysis further suggests that clustering using Gaussian mixture models is generally superior to the commonly used k-means in spectral clustering. Extensive numerical studies demonstrate that our adaptive aggregation techniques, together with Gaussian mixture model clustering, make the new spectral clustering remarkably competitive compared with several popular methods. Comment: 71 pages
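    The convex layer aggregation at the core of these methods can be sketched as follows. Here the weights are taken as given, whereas the proposed approaches estimate them adaptively; the function name is an illustrative assumption:

```python
import numpy as np

def aggregate_and_embed(layers, weights, k):
    """Convex layer aggregation followed by spectral embedding (sketch).

    layers  : list of (n, n) symmetric adjacency matrices, one per layer
    weights : nonnegative weights summing to 1 (a point on the simplex);
              the paper *estimates* the optimal weights, here they are given
    k       : embedding dimension / number of communities
    """
    w = np.asarray(weights, float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must be convex"
    # Weighted adjacency matrix: a convex combination of the layers.
    A = sum(wi * Ai for wi, Ai in zip(w, layers))
    # Spectral embedding: leading-k eigenvectors by absolute eigenvalue.
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(-np.abs(vals))[:k]
    # Rows of the embedding are then clustered downstream
    # (the paper argues for a Gaussian mixture model over k-means).
    return vecs[:, order]
```

    Nodes in the same community map to nearly identical rows of the embedding, which is what the downstream clustering step exploits.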

    The impact of two-way FDI on total factor productivity in China and countries of the belt and road initiative

    This study uses the DEA-Malmquist index method to measure the total factor productivity of 36 Belt and Road countries and establishes a dynamic panel model. It then carries out an empirical analysis of whether two-way investment between China and Belt and Road Initiative countries can improve total factor productivity. First, home-country technology spillover has a significant effect on improving the total factor productivity and technical efficiency index of countries along the route, while host-country technology spillover has no significant effect on total factor productivity. Second, in Asia, host-country technology spillover has a significant effect on total factor productivity, while home-country technology spillover does not. Finally, in Europe, home-country technology spillover benefits the improvement of resource allocation, while host-country technology spillover benefits the improvement of total factor productivity and the technical efficiency index. Therefore, China should continue to increase its investment in Belt and Road countries.

    Experimental analysis of 3D flow structures around a floating dike

    Floating dikes have several advantages over spur dikes, including less influence on riverine sediment transport, bed topography, and ecosystems, and good adaptability to fluvial conditions. Despite these advantages, floating dikes have not been used in many river regulation schemes because of the limited understanding of the 3D flow structures around them. In this study, a series of experiments was conducted to investigate these flow structures. Results show that, after a floating dike is installed on one side of a flume, the surface water flow is deflected to the opposite side, and a backflow develops around the outer and downstream sides of the dike, where both the vertical turbulent intensity and the absolute magnitude of the Reynolds stress are relatively large. Due to the blocking effect of the dike, the cross-sectional area decreases, causing an increase in velocities below and alongside the dike, as well as a decrease in velocities upstream of it. Increasing the submerged depth or length of the dike increases the flow velocity adjacent to the dike and enlarges the vertical or lateral scale of the backflow. By contrast, increasing the dike thickness weakens or eliminates the backflow and reduces the acceleration of the flow adjacent to the dike.

    ELUCID IV: Galaxy Quenching and its Relation to Halo Mass, Environment, and Assembly Bias

    We examine the quenched fraction of central and satellite galaxies as a function of galaxy stellar mass, halo mass, and the matter density of their large-scale environment. Matter densities are inferred from our ELUCID simulation, a constrained simulation of the local Universe sampled by SDSS, while halo masses and central/satellite classification are taken from the galaxy group catalog of Yang et al. The quenched fraction for the total population increases systematically with all three quantities. We find that the `environmental quenching efficiency', which quantifies the quenched fraction as a function of halo mass, is independent of stellar mass, and that this independence is the origin of the stellar-mass independence of the density-based quenching efficiency found in previous studies. Considering centrals and satellites separately, we find that the two populations follow similar correlations of quenching efficiency with halo mass and stellar mass, suggesting that they have experienced similar quenching processes in their host halo. We demonstrate that satellite quenching alone cannot account for the environmental quenching efficiency of the total galaxy population, and that the difference between the two populations found previously arises mainly from the fact that centrals and satellites of the same stellar mass reside, on average, in halos of different mass. After removing these halo-mass and stellar-mass effects, there remains a weak but significant residual dependence on environmental density, which is eliminated when halo assembly bias is taken into account. Our results therefore indicate that halo mass is the prime environmental parameter that regulates the quenching of both centrals and satellites. Comment: 21 pages, 16 figures, submitted to Ap

    Predictive value of remnant-like particle cholesterol in the prediction of long-term AF recurrence after radiofrequency catheter ablation

    Objective: The relationship between remnant-like particle cholesterol (RLP-C) levels and the progression of atrial fibrillation (AF) is not known. This research aimed to explore the association of RLP-C with long-term AF recurrence events after radiofrequency catheter ablation (RFCA) of AF.
    Methods: In total, 320 patients with AF who underwent a first RFCA were included. Baseline information and laboratory data were retrospectively collected, and a 1-year follow-up was completed. The follow-up endpoint was defined as an AF recurrence event occurring after 3 months. A multivariate Cox regression model was then constructed to analyze the risk factors affecting AF recurrence.
    Results: AF recurrence occurred in 103 patients (32.2%) within 3–12 months after RFCA. Based on the multivariate Cox regression analysis, early recurrence (ER) [hazard ratio (HR) = 1.57, 95% confidence interval (CI): 1.04–2.36, P = 0.032], coronary artery disease (CAD) (HR = 2.03, 95% CI: 1.22–3.38, P = 0.006), left atrium anterior-posterior diameter (LAD) (HR = 1.07, 95% CI: 1.03–1.10, P < 0.001), triglyceride (TG) (HR = 1.51, 95% CI: 1.16–1.96, P = 0.002), low-density lipoprotein cholesterol (LDL-C) (HR = 0.74, 95% CI: 0.55–0.98, P = 0.036), and RLP-C (HR = 0.75 per 0.1 mmol/L increase, 95% CI: 0.68–0.83, P < 0.001) were linked to the risk of AF recurrence. Among these, the relationship between RLP-C and AF recurrence was reported for the first time. The predictive value of RLP-C for AF recurrence was analyzed using receiver operating characteristic (ROC) curves [area under the curve (AUC) = 0.81, 95% CI: 0.77–0.86, P < 0.001]. The optimal threshold value of RLP-C was then determined to be 0.645 mmol/L, with a sensitivity of 87.4% and a specificity of 63.6%, based on the Youden index. Kaplan–Meier analysis indicated a lower AF recurrence rate in the >0.645 mmol/L group than in the ≤0.645 mmol/L group (log-rank P < 0.001).
    Conclusion: Low levels of RLP-C are associated with a higher risk of AF recurrence after RFCA, suggesting that RLP-C may be a biomarker that helps identify long-term AF recurrence
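    The Youden-index threshold selection used in the ROC analysis can be sketched in a few lines. This is an illustrative re-implementation on synthetic data, not the study's code; note that because low RLP-C carries the higher risk here (HR < 1), values at or below the cut-off are flagged as positive:

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the cut-off maximizing Youden's index J = sensitivity + specificity - 1.

    scores : continuous marker values (here: RLP-C, mmol/L)
    labels : 1 = event (AF recurrence), 0 = no event
    Illustrative sketch; the study reports a threshold of 0.645 mmol/L
    on its own cohort.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores <= t  # low marker values flagged as high risk (HR < 1)
        sens = pred[labels == 1].mean()       # true positive rate
        spec = (~pred)[labels == 0].mean()    # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

    Sweeping every observed value as a candidate cut-off and keeping the one with the largest J is exactly the Youden criterion applied to an empirical ROC curve.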